
Is floating point math broken?

Ask Time: 2009-02-26T05:39:02    Author: Cato Johnston


Consider the following code:

0.1 + 0.2 == 0.3  ->  false
0.1 + 0.2         ->  0.30000000000000004

Why do these inaccuracies happen?

Author: Cato Johnston. Reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article: https://stackoverflow.com/questions/588004/is-floating-point-math-broken
KernelPanik :

A Hardware Designer's Perspective

I believe I should add a hardware designer's perspective to this since I design and build floating point hardware. Knowing the origin of the error may help in understanding what is happening in the software, and ultimately, I hope this helps explain the reasons for why floating point errors happen and seem to accumulate over time.

1. Overview

From an engineering perspective, most floating point operations will have some element of error since the hardware that does the floating point computations is only required to have an error of less than one half of one unit in the last place. Therefore, much hardware will stop at a precision that's only necessary to yield an error of less than one half of one unit in the last place for a single operation, which is especially problematic in floating point division. What constitutes a single operation depends upon how many operands the unit takes. For most, it is two, but some units take 3 or more operands. Because of this, there is no guarantee that repeated operations will result in a desirable error, since the errors add up over time.

2. Standards

Most processors follow the IEEE-754 standard but some use denormalized, or different standards. For example, there is a denormalized mode in IEEE-754 which allows representation of very small floating point numbers at the expense of precision. The following, however, will cover the normalized mode of IEEE-754, which is the typical mode of operation.

In the IEEE-754 standard, hardware designers are allowed any value of error/epsilon as long as it's less than one half of one unit in the last place, and the result only has to be less than one half of one unit in the last place for one operation. This explains why, when there are repeated operations, the errors add up. For IEEE-754 double precision, this is the 54th bit, since 53 bits are used to represent the numeric part (normalized), also called the mantissa, of the floating point number (e.g. the 5.3 in 5.3e5). The next sections go into more detail on the causes of hardware error in various floating point operations.

3. Cause of Rounding Error in Division

The main cause of the error in floating point division is the division algorithms used to calculate the quotient. Most computer systems calculate division using multiplication by an inverse, i.e. for Z = X/Y they compute Z = X * (1/Y). A division is computed iteratively, i.e. each cycle computes some bits of the quotient until the desired precision is reached, which for IEEE-754 is anything with an error of less than one unit in the last place. The table of reciprocals of Y (1/Y) is known as the quotient selection table (QST) in slow division, and the size in bits of the quotient selection table is usually the width of the radix, or the number of bits of the quotient computed in each iteration, plus a few guard bits. For the IEEE-754 standard, double precision (64-bit), it would be the size of the radix of the divider, plus a few guard bits k, where k >= 2. So for example, a typical quotient selection table for a divider that computes 2 bits of the quotient at a time (radix 4) would be 2 + 2 = 4 bits (plus a few optional bits).

3.1 Division Rounding Error: Approximation of Reciprocal

Which reciprocals appear in the quotient selection table depends on the division method: slow division such as SRT division, or fast division such as Goldschmidt division; each entry is modified according to the division algorithm in an attempt to yield the lowest possible error. In any case, though, all reciprocals are approximations of the actual reciprocal and introduce some element of error. Both slow division and fast division methods calculate the quotient iteratively, i.e. some number of bits of the quotient are calculated each step, then the result is subtracted from the dividend, and the divider repeats the steps until the error is less than one half of one unit in the last place. Slow division methods calculate a fixed number of digits of the quotient in each step and are usually less expensive to build, while fast division methods calculate a variable number of digits per step and are usually more expensive to build. The most important part of the division methods is that most of them rely upon repeated multiplication by an approximation of a reciprocal, so they are prone to error.

4. Rounding Errors in Other Operations: Truncation

Another cause of the rounding errors in all operations is the different modes of truncation of the final answer that IEEE-754 allows. There's truncate, round-towards-zero, round-to-nearest (default), round-down, and round-up. All methods introduce an element of error of less than one unit in the last place for a single operation. Over time and repeated operations, truncation also adds cumulatively to the resultant error. This truncation error is especially problematic in exponentiation, which involves some form of repeated multiplication.

5. Repeated Operations

Since the hardware that does the floating point calculations only needs to yield a result with an error of less than one half of one unit in the last place for a single operation, the error will grow over repeated operations if not watched. This is the reason that, in computations that require a bounded error, mathematicians use methods such as rounding to the nearest even digit in the last place of IEEE-754, because over time the errors are more likely to cancel each other out, and interval arithmetic combined with variations of the IEEE-754 rounding modes to predict rounding errors and correct them. Because of its low relative error compared to other rounding modes, round to nearest even digit (in the last place) is the default rounding mode of IEEE-754.

Note that the default rounding mode, round-to-nearest even digit in the last place, guarantees an error of less than one half of one unit in the last place for one operation. Using truncation, round-up, or round-down alone may result in an error that is greater than one half of one unit in the last place, but less than one unit in the last place, so these modes are not recommended unless they are used in interval arithmetic.

6. Summary

In short, the fundamental reason for the errors in floating point operations is a combination of the truncation in hardware and the truncation of a reciprocal in the case of division. Since the IEEE-754 standard only requires an error of less than one half of one unit in the last place for a single operation, the floating point errors over repeated operations will add up unless corrected.
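To make the point about repeated operations concrete, here is a minimal JavaScript sketch (my own illustration, not part of the answer above; the iteration counts are arbitrary):

let sum = 0;
for (let i = 0; i < 10; i++) sum += 0.1;   // ten additions, each one rounded
console.log(sum);                          // 0.9999999999999999 rather than 1
console.log(sum === 1);                    // false

let bigSum = 0;
for (let i = 0; i < 1000; i++) bigSum += 0.1;
console.log(Math.abs(bigSum - 100));       // roughly 1e-12: far larger than the error of any single addition

Each individual addition is correctly rounded to within half a unit in the last place, but nothing bounds the total drift once many such results are fed back into further additions.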
2013-04-18T11:52:32
Joel Coehoorn :

It's broken in the exact same way the decimal (base-10) notation you learned in grade school and use every day is broken, just for base-2.

To understand, think about representing 1/3 as a decimal value. It's impossible to do exactly! The world will end before you finish writing the 3's after the decimal point, and so instead we write to some number of places and consider it sufficiently accurate.

In the same way, 1/10 (decimal 0.1) cannot be represented exactly in base 2 (binary) as a "decimal" value; a repeating pattern after the decimal point goes on forever. The value is not exact, and therefore you can't do exact math with it using normal floating point methods. Just like with base 10, there are other values that exhibit this problem as well.
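To see the "written to some number of places" effect directly, here is a small JavaScript sketch of my own (not part of the answer); printing extra digits exposes the value that is actually stored:

console.log((0.1).toFixed(20));    // "0.10000000000000000555" on a typical engine
console.log((1 / 3).toFixed(20));  // "0.33333333333333331483": the base-10 analogue of the same truncation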
2009-02-25T21:43:07
C. K. Young :

Most answers here address this question in very dry, technical terms. I'd like to address this in terms that normal human beings can understand.

Imagine that you are trying to slice up pizzas. You have a robotic pizza cutter that can cut pizza slices exactly in half. It can halve a whole pizza, or it can halve an existing slice, but in any case, the halving is always exact.

That pizza cutter has very fine movements, and if you start with a whole pizza, then halve that, and continue halving the smallest slice each time, you can do the halving 53 times before the slice is too small for even its high-precision abilities. At that point, you can no longer halve that very thin slice, but must either include or exclude it as is.

Now, how would you piece all the slices in such a way that would add up to one-tenth (0.1) or one-fifth (0.2) of a pizza? Really think about it, and try working it out. You can even try to use a real pizza, if you have a mythical precision pizza cutter at hand. :-)

Most experienced programmers, of course, know the real answer, which is that there is no way to piece together an exact tenth or fifth of the pizza using those slices, no matter how finely you slice them. You can do a pretty good approximation, and if you add up the approximation of 0.1 with the approximation of 0.2, you get a pretty good approximation of 0.3, but it's still just that, an approximation.

For double-precision numbers (which is the precision that allows you to halve your pizza 53 times), the numbers immediately less and greater than 0.1 are 0.09999999999999999167332731531132594682276248931884765625 and 0.1000000000000000055511151231257827021181583404541015625. The latter is quite a bit closer to 0.1 than the former, so a numeric parser will, given an input of 0.1, favour the latter.

(The difference between those two numbers is the "smallest slice" that we must decide to either include, which introduces an upward bias, or exclude, which introduces a downward bias. The technical term for that smallest slice is an ulp.)

In the case of 0.2, the numbers are all the same, just scaled up by a factor of 2. Again, we favour the value that's slightly higher than 0.2.

Notice that in both cases, the approximations for 0.1 and 0.2 have a slight upward bias. If we add enough of these biases in, they will push the number further and further away from what we want, and in fact, in the case of 0.1 + 0.2, the bias is high enough that the resulting number is no longer the closest number to 0.3.

In particular, 0.1 + 0.2 is really 0.1000000000000000055511151231257827021181583404541015625 + 0.200000000000000011102230246251565404236316680908203125 = 0.3000000000000000444089209850062616169452667236328125, whereas the number closest to 0.3 is actually 0.299999999999999988897769753748434595763683319091796875.

P.S. Some programming languages also provide pizza cutters that can split slices into exact tenths. Although such pizza cutters are uncommon, if you do have access to one, you should use it when it's important to be able to get exactly one-tenth or one-fifth of a slice.

(Originally posted on Quora.)
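To put a number on the "smallest slice" (the ulp), here is a JavaScript sketch of my own; nextUp is a hypothetical helper, not a built-in, and it simply bumps the raw 64-bit pattern, which is valid for finite positive inputs:

// Hypothetical helper: the next representable double above x.
function nextUp(x) {
  const f = new Float64Array([x]);
  const u = new BigUint64Array(f.buffer);   // same bits, viewed as a 64-bit integer
  u[0] += 1n;                               // valid for finite, positive x
  return f[0];
}

console.log(nextUp(0.1) - 0.1);          // the slice just above 0.1 (about 1.39e-17)
console.log(nextUp(0.2) - 0.2);          // twice as large: same pattern, scaled by 2
console.log(0.1 + 0.2 === nextUp(0.3));  // true: the sum lands one slice above 0.3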
2014-11-20T02:39:59
Devin Jeanpierre :

Floating point rounding errors. 0.1 cannot be represented as accurately in base-2 as in base-10 due to the missing prime factor of 5. Just as 1/3 takes an infinite number of digits to represent in decimal, but is "0.1" in base-3, 0.1 takes an infinite number of digits in base-2 where it does not in base-10. And computers don't have an infinite amount of memory.
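A quick corollary of the missing factor of 5, shown in a short sketch of my own: fractions whose denominators are pure powers of two are exact in binary, so arithmetic on them is exact, while tenths and fifths are not:

console.log(0.5 + 0.25 === 0.75);    // true: 1/2, 1/4 and 3/4 are all exact in base 2
console.log(0.375 + 0.125 === 0.5);  // true: 3/8 + 1/8 is exact as well
console.log(0.1 + 0.2 === 0.3);      // false: 1/10, 1/5 and 3/10 all repeat in base 2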
2009-02-25T21:41:23
Wai Ha Lee :

My answer is quite long, so I've split it into three sections. Since the question is about floating point mathematics, I've put the emphasis on what the machine actually does. I've also made it specific to double (64 bit) precision, but the argument applies equally to any floating point arithmetic.

Preamble

An IEEE 754 double-precision binary floating-point format (binary64) number represents a number of the form

value = (-1)^s * (1.m51 m50 ... m2 m1 m0)_2 * 2^(e - 1023)

in 64 bits:

- The first bit is the sign bit: 1 if the number is negative, 0 otherwise [1].
- The next 11 bits are the exponent, which is offset by 1023. In other words, after reading the exponent bits from a double-precision number, 1023 must be subtracted to obtain the power of two.
- The remaining 52 bits are the significand (or mantissa). In the mantissa, an 'implied' 1. is always [2] omitted since the most significant bit of any binary value is 1.

[1] IEEE 754 allows for the concept of a signed zero: +0 and -0 are treated differently; 1 / (+0) is positive infinity and 1 / (-0) is negative infinity. For zero values, the mantissa and exponent bits are all zero. Note: zero values (+0 and -0) are explicitly not classed as denormal [2].

[2] This is not the case for denormal numbers, which have an offset exponent of zero (and an implied 0.). The range of denormal double precision numbers is dmin ≤ |x| ≤ dmax, where dmin (the smallest representable nonzero number) is 2^(-1023 - 51) (≈ 4.94 * 10^-324) and dmax (the largest denormal number, for which the mantissa consists entirely of 1s) is 2^(-1023 + 1) - 2^(-1023 - 51) (≈ 2.225 * 10^-308).

Turning a double precision number into binary

Many online converters exist to convert a double precision floating point number to binary (e.g. at binaryconvert.com), but here is some sample C# code to obtain the IEEE 754 representation for a double precision number (I separate the three parts with colons (:)):

public static string BinaryRepresentation(double value)
{
    long valueInLongType = BitConverter.DoubleToInt64Bits(value);
    string bits = Convert.ToString(valueInLongType, 2);
    string leadingZeros = new string('0', 64 - bits.Length);
    string binaryRepresentation = leadingZeros + bits;

    string sign = binaryRepresentation[0].ToString();
    string exponent = binaryRepresentation.Substring(1, 11);
    string mantissa = binaryRepresentation.Substring(12);

    return string.Format("{0}:{1}:{2}", sign, exponent, mantissa);
}

Getting to the point: the original question

(Skip to the bottom for the TL;DR version)

Cato Johnston (the question asker) asked why 0.1 + 0.2 != 0.3.

Written in binary (with colons separating the three parts), the IEEE 754 representations of the values are:

0.1 => 0:01111111011:1001100110011001100110011001100110011001100110011010
0.2 => 0:01111111100:1001100110011001100110011001100110011001100110011010

Note that the mantissa is composed of recurring digits of 0011. This is key to why there is any error in the calculations: 0.1, 0.2 and 0.3 cannot be represented precisely in a finite number of binary bits any more than 1/9, 1/3 or 1/7 can be represented precisely in decimal digits.

Also note that we can decrease the power in the exponent by 52 and shift the point in the binary representation to the right by 52 places (much like 10^-3 * 1.23 == 10^-5 * 123). This then enables us to express the binary representation as the exact value that it represents in the form a * 2^p, where 'a' is an integer.

Converting the exponents to decimal, removing the offset, and re-adding the implied 1 (in square brackets), 0.1 and 0.2 are:

0.1 => 2^-4 * [1].1001100110011001100110011001100110011001100110011010
0.2 => 2^-3 * [1].1001100110011001100110011001100110011001100110011010

or

0.1 => 2^-56 * 7205759403792794 = 0.1000000000000000055511151231257827021181583404541015625
0.2 => 2^-55 * 7205759403792794 = 0.200000000000000011102230246251565404236316680908203125

To add two numbers, the exponents need to be the same, i.e.:

0.1 => 2^-3 * 0.1100110011001100110011001100110011001100110011001101(0)
0.2 => 2^-3 * 1.1001100110011001100110011001100110011001100110011010
sum = 2^-3 * 10.0110011001100110011001100110011001100110011001100111

or

0.1 => 2^-55 * 3602879701896397 = 0.1000000000000000055511151231257827021181583404541015625
0.2 => 2^-55 * 7205759403792794 = 0.200000000000000011102230246251565404236316680908203125
sum = 2^-55 * 10808639105689191 = 0.3000000000000000166533453693773481063544750213623046875

Since the sum is not of the form 2^n * 1.{bbb}, we increase the exponent by one and shift the decimal (binary) point to get:

sum = 2^-2 * 1.0011001100110011001100110011001100110011001100110011(1)
    = 2^-54 * 5404319552844595.5 = 0.3000000000000000166533453693773481063544750213623046875

There are now 53 bits in the mantissa (the 53rd is shown in parentheses in the line above). The default rounding mode for IEEE 754 is 'Round to Nearest', i.e. if a number x falls exactly halfway between two values a and b, the value where the least significant bit is zero is chosen.

a = 2^-54 * 5404319552844595 = 0.299999999999999988897769753748434595763683319091796875
  = 2^-2 * 1.0011001100110011001100110011001100110011001100110011

x = 2^-2 * 1.0011001100110011001100110011001100110011001100110011(1)

b = 2^-2 * 1.0011001100110011001100110011001100110011001100110100
  = 2^-54 * 5404319552844596 = 0.3000000000000000444089209850062616169452667236328125

Note that a and b differ only in the last bit; ...0011 + 1 = ...0100. In this case, the value with the least significant bit of zero is b, so the sum is:

sum = 2^-2 * 1.0011001100110011001100110011001100110011001100110100
    = 2^-54 * 5404319552844596 = 0.3000000000000000444089209850062616169452667236328125

whereas the binary representation of 0.3 is:

0.3 => 2^-2 * 1.0011001100110011001100110011001100110011001100110011
     = 2^-54 * 5404319552844595 = 0.299999999999999988897769753748434595763683319091796875

which only differs from the binary representation of the sum of 0.1 and 0.2 by 2^-54.

The binary representations of 0.1 and 0.2 are the most accurate representations of the numbers allowed by IEEE 754. The addition of these representations, due to the default rounding mode, results in a value which differs only in the least significant bit.

TL;DR

Writing 0.1 + 0.2 in an IEEE 754 binary representation (with colons separating the three parts) and comparing it to 0.3, this is (I've put the distinct bits in square brackets):

0.1 + 0.2 => 0:01111111101:0011001100110011001100110011001100110011001100110[100]
0.3       => 0:01111111101:0011001100110011001100110011001100110011001100110[011]

Converted back to decimal, these values are:

0.1 + 0.2 => 0.300000000000000044408920985006...
0.3       => 0.299999999999999988897769753748...

The difference is exactly 2^-54, which is ~5.5511151231258 * 10^-17; insignificant (for many applications) when compared to the original values.

Comparing the last few bits of a floating point number is inherently dangerous, as anyone who reads the famous "What Every Computer Scientist Should Know About Floating-Point Arithmetic" (which covers all the major parts of this answer) will know.

Most calculators use additional guard digits to get around this problem, which is how 0.1 + 0.2 would give 0.3: the final few bits are rounded.
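For anyone who wants to reproduce the sign:exponent:mantissa strings above without C#, here is a rough JavaScript equivalent of the BinaryRepresentation helper (a sketch of my own; the function name is mine):

function binaryRepresentation(value) {
  const f = new Float64Array([value]);
  const bits = new BigUint64Array(f.buffer)[0].toString(2).padStart(64, "0");
  return bits[0] + ":" + bits.slice(1, 12) + ":" + bits.slice(12);
}

console.log(binaryRepresentation(0.1));
// 0:01111111011:1001100110011001100110011001100110011001100110011010
console.log(binaryRepresentation(0.1 + 0.2));  // compare with the TL;DR above
console.log(binaryRepresentation(0.3));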
2015-02-23T17:15:35
Daniel Vassallo :

In addition to the other correct answers, you may want to consider scaling your values to avoid problems with floating-point arithmetic.

For example:

var result = 1.0 + 2.0;    // result === 3.0 returns true

... instead of:

var result = 0.1 + 0.2;    // result === 0.3 returns false

The expression 0.1 + 0.2 === 0.3 returns false in JavaScript, but fortunately integer arithmetic in floating-point is exact, so decimal representation errors can be avoided by scaling.

As a practical example, to avoid floating-point problems where accuracy is paramount, it is recommended [1] to handle money as an integer representing the number of cents: 2550 cents instead of 25.50 dollars.

[1] Douglas Crockford: JavaScript: The Good Parts: Appendix A - Awful Parts (page 105).
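A minimal sketch of the money-as-cents idea (my own, with made-up variable names): integer arithmetic stays exact as long as the values fit in 53 bits, so keep cents as integers and only divide when formatting for display:

const a = 10;                      // $0.10 held as 10 cents
const b = 20;                      // $0.20 held as 20 cents
console.log(a + b === 30);         // true: integer addition is exact
console.log((a + b) / 100);        // 0.3, computed only for display

console.log(0.10 + 0.20 === 0.30); // false: the same sum done in dollars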
2010-04-09T12:25:09
Mark Ransom :

Floating point numbers stored in the computer consist of two parts, an integer and an exponent that the base is taken to and multiplied by the integer part.

If the computer were working in base 10, 0.1 would be 1 x 10⁻¹, 0.2 would be 2 x 10⁻¹, and 0.3 would be 3 x 10⁻¹. Integer math is easy and exact, so adding 0.1 + 0.2 will obviously result in 0.3.

Computers don't usually work in base 10, they work in base 2. You can still get exact results for some values, for example 0.5 is 1 x 2⁻¹ and 0.25 is 1 x 2⁻², and adding them results in 3 x 2⁻², or 0.75. Exactly.

The problem comes with numbers that can be represented exactly in base 10, but not in base 2. Those numbers need to be rounded to their closest equivalent. Assuming the very common IEEE 64-bit floating point format, the closest number to 0.1 is 3602879701896397 x 2⁻⁵⁵, and the closest number to 0.2 is 7205759403792794 x 2⁻⁵⁵; adding them together results in 10808639105689191 x 2⁻⁵⁵, or an exact decimal value of 0.3000000000000000166533453693773481063544750213623046875. That integer needs 54 bits, one more than the format can hold, so the sum is rounded to the nearest representable value, 0.3000000000000000444089209850062616169452667236328125. Floating point numbers are generally rounded again for display.
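The integer-times-power-of-two claims above can be checked directly; a short sketch of my own (both integers fit in 53 bits and powers of two are exact, so these products round to exactly the stored doubles):

console.log(3602879701896397 * 2 ** -55 === 0.1);  // true
console.log(7205759403792794 * 2 ** -55 === 0.2);  // true
console.log(0.1 + 0.2 > 0.3);                      // true: the rounded sum sits just above 0.3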
2016-03-16T05:27:16
Muhammad Musavi :

In short it's because:

Floating point numbers cannot represent all decimals precisely in binary.

So just like 10/3, which does not exist in base 10 precisely (it will be 3.33... recurring), 1/10 doesn't exist precisely in binary.

So what? How do we deal with it? Is there any workaround?

The best solution I can offer is the following method:

parseFloat((0.1 + 0.2).toFixed(10)) => Will return 0.3

Let me explain why it's the best solution. As others mentioned in the answers above, it's a good idea to use the ready-to-use JavaScript toFixed() function to solve the problem. But most likely you'll encounter some problems.

Imagine you are going to add up two float numbers like 0.2 and 0.7; here it is: 0.2 + 0.7 = 0.8999999999999999.

Your expected result was 0.9, which means you need a result with 1-digit precision in this case. So you should have used (0.2 + 0.7).toFixed(1), but you can't just give a fixed parameter to toFixed(), since it depends on the given number. For instance:

0.22 + 0.7 = 0.9199999999999999

In this example you need 2-digit precision, so it should be toFixed(2). So what should the parameter be to fit every given float number? You might say let it be 10 in every situation, then:

(0.2 + 0.7).toFixed(10) => Result will be "0.9000000000"

Damn! What are you going to do with those unwanted zeros after 9? It's time to convert it to a float to make it as you desire:

parseFloat((0.2 + 0.7).toFixed(10)) => Result will be 0.9

Now that you found the solution, it's better to offer it as a function like this:

function floatify(number) {
    return parseFloat(number.toFixed(10));
}

Let's try it yourself (JavaScript):

function floatify(number) {
    return parseFloat(number.toFixed(10));
}

function addUp() {
    var number1 = +$("#number1").val();
    var number2 = +$("#number2").val();
    var unexpectedResult = number1 + number2;
    var expectedResult = floatify(number1 + number2);
    $("#unexpectedResult").text(unexpectedResult);
    $("#expectedResult").text(expectedResult);
}
addUp();

CSS:

input {
    width: 50px;
}
#expectedResult {
    color: green;
}
#unexpectedResult {
    color: red;
}

HTML:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input id="number1" value="0.2" onclick="addUp()" onkeyup="addUp()"/> +
<input id="number2" value="0.7" onclick="addUp()" onkeyup="addUp()"/> =
<p>Expected Result: <span id="expectedResult"></span></p>
<p>Unexpected Result: <span id="unexpectedResult"></span></p>
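A short usage sketch of floatify (my own addition): it cleans up representation error far below the 10-decimal-place cutoff, but anything genuinely smaller than that cutoff is discarded too, so treat it as a display and comparison aid rather than a fix for the underlying binary representation:

function floatify(number) {
    return parseFloat(number.toFixed(10));
}

console.log(floatify(0.2 + 0.7));           // 0.9
console.log(floatify(0.1 + 0.2) === 0.3);   // true
console.log(floatify(1.00000000004));       // 1: the legitimate 11th-decimal digit is also lost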
2018-08-07T09:34:15